Filesystem over S3: s3backer
I have recently been experimenting with S3-compatible storage, in an effort to find the most efficient way to expand my server's storage.
After scouring the web for options, and briefly settling on S3FS, I discovered s3backer.
The Problem
Storage in S3 happens via “objects” in “buckets”. Each object is at least 4 kB in size, and as far as I know there is no upper limit. Crucially, objects cannot be updated in place. Suppose you store myFile.odt as an object in your S3 bucket. If you then edit the file and want to store the changes, S3 discards the copy it already has and requires you to push the entire file to the cloud again.

While this is not a problem for read-only data, it quickly adds a lot of overhead for “hot data”: data that is read from and written to often. Not only is this method wasteful for repeated rewrites of a file; in my experiments, writing big files directly to S3 (over S3FS) was also painfully slow.
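To make the rewrite cost concrete, here is a minimal sketch using boto3, the AWS SDK for Python (the bucket name and file are placeholders I've made up). The point is that the API offers no partial-update call: persisting even a one-byte edit means pushing the whole object again.

```python
import boto3

s3 = boto3.client("s3")

# Initial upload: the whole file goes over the wire.
with open("myFile.odt", "rb") as f:
    s3.put_object(Bucket="my-bucket", Key="myFile.odt", Body=f)

# ... edit myFile.odt locally ...

# There is no call to modify part of an existing object. The only way
# to persist the change is another full put_object, which replaces the
# previous object wholesale.
with open("myFile.odt", "rb") as f:
    s3.put_object(Bucket="my-bucket", Key="myFile.odt", Body=f)
```

(Multipart uploads exist, but they only split a single upload into chunks; they do not let you patch an object that is already stored.)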

